Rivridis Assistant: Local LLM Agent Architecture and Implementation

Rivridis Assistant is a local AI agent built for Windows that runs large language models (LLMs) entirely on the user’s machine. The system is designed for privacy, extensibility, and automation, allowing custom workflows without relying on cloud services.

System Overview

The architecture consists of three main components:

- Frontend: Built with PySide (Qt), providing a native Windows GUI for interacting with the assistant.
- Backend: Implemented in Python, handling model inference, task orchestration, and function execution.
- LLM Integration: Supports local models such as Mistral 7B, running through a combination of llama-cpp-python for local inference and the openai Python library for OpenAI-compatible functionality.

The system is modular, enabling easy integration of new models and tools.

Custom Function Calling Framework

Instead of relying on orchestration libraries such as LangChain, a lightweight function calling framework was developed. Its features include:

- Task Execution: Functions can perform local PC operations, such as file management, running scripts, or querying system resources.
- Input/Output Validation: Ensures correct data types and prevents unexpected errors.
- Extensibility: New functions can be added simply by defining Python functions with specified inputs and outputs.

This framework allows the LLM to interact safely with local resources while keeping the system lightweight and modular.

LLM Integration

Rivridis Assistant supports both local and remote models:

- Local Models: Mistral 7B is served through llama-cpp-python, allowing large models to run efficiently on local hardware.
- OpenAI API: Optionally supports models through the openai Python library, enabling hybrid workflows.
- Function Calling: The assistant can trigger local functions directly based on LLM outputs, enabling automation of complex tasks.

Frontend and Workflow

The GUI provides:

- Input for natural language queries or commands
- Display of LLM-generated outputs and results from executed functions
- Logging of executed tasks for debugging and reproducibility

The frontend communicates with the backend via Python function calls, ensuring low latency and a responsive experience.

Key Design Decisions

- Local-first: All computation occurs on the user’s machine to protect sensitive data.
- Minimal Dependencies: Avoided heavy orchestration libraries to reduce overhead and improve transparency.
- Function Calling Framework: Provides a safe, extensible mechanism for LLMs to perform local tasks.
- Modular LLM Support: Any model compatible with llama-cpp-python or the OpenAI API can be integrated seamlessly.

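
The function calling framework described in this article could be sketched roughly as follows. All names here (`tool`, `dispatch`, `add_numbers`) are illustrative, not the project's actual API: the idea is that plain type-annotated Python functions are registered, and calls coming back from the LLM as JSON are validated against the annotations before execution.

```python
import inspect
import json

# Hypothetical registry: decorated functions become callable by the LLM.
REGISTRY = {}

def tool(func):
    """Register a function so the LLM can call it by name."""
    REGISTRY[func.__name__] = func
    return func

@tool
def add_numbers(a: int, b: int) -> int:
    """Example tool: add two integers."""
    return a + b

def dispatch(call_json: str):
    """Validate and execute a JSON call such as
    {"name": "add_numbers", "arguments": {"a": 2, "b": 3}}."""
    call = json.loads(call_json)
    func = REGISTRY.get(call["name"])
    if func is None:
        raise ValueError(f"Unknown function: {call['name']}")
    sig = inspect.signature(func)
    # bind() rejects missing or unexpected arguments outright.
    bound = sig.bind(**call["arguments"])
    # Check each argument against its type annotation, if present.
    for name, value in bound.arguments.items():
        expected = sig.parameters[name].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            raise TypeError(f"{name} should be {expected.__name__}")
    return func(*bound.args, **bound.kwargs)

print(dispatch('{"name": "add_numbers", "arguments": {"a": 2, "b": 3}}'))  # 5
```

Under this scheme, adding a new capability is a one-function change (define it, decorate it), which is what would keep such a framework extensible while still rejecting malformed or mistyped calls.
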
Conclusion

Rivridis Assistant demonstrates a modular and extensible approach to building local LLM agents for desktop environments. By combining Python, PySide, local LLMs, and a custom function framework, it provides a foundation for secure, automation-focused AI tools that operate fully offline.
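
To make the function-calling flow concrete: a model served through llama-cpp-python ultimately returns text, so before anything can be dispatched the assistant has to recognize a function call inside the model's reply. One common approach, sketched below with purely illustrative names (nothing here is taken from the project's code), is to prompt the model to answer with a JSON object and then scan the reply for the first parseable one:

```python
import json

def extract_call(text: str):
    """Return the first parseable JSON object found in `text`, or None.

    Walks brace pairs by depth; a simple sketch that does not handle
    braces inside JSON string values.
    """
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # not valid JSON; try the next brace
        start = text.find("{", start + 1)
    return None

# Hypothetical raw model reply mixing chat text with a call.
reply = 'Sure, running it now: {"name": "open_file", "arguments": {"path": "notes.txt"}}'
call = extract_call(reply)
print(call["name"])  # open_file
```

A production version would need to cope with braces inside string values and partially malformed JSON; this sketch only shows the shape of the text-to-call step that sits between the LLM output and the function dispatcher.
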